This document is intended for people who want to extend WildFly 9 to introduce new capabilities.
You should know how to download, install and run WildFly 9. If not, please consult the Getting Started Guide. You should also be familiar with the management concepts from the Admin Guide, particularly the Core management concepts section, and you need Java development experience to follow the example in this guide.
Most of the examples in this guide are expressed as excerpts of the XML configuration files or by using a representation of the de-typed management model.
In this document we provide an example of how to extend the core functionality of WildFly 9 via an extension and the subsystem it installs. The WildFly 9 core is very simple and lightweight; most of the capabilities people associate with an application server are provided via extensions and their subsystems. The WildFly 9 distribution includes many extensions and subsystems: the web server integration is via a subsystem, the transaction manager integration is via a subsystem, the EJB container integration is via a subsystem, and so on.
This document is divided into two main sections. The first is focused on learning by doing. This section will walk you through the steps needed to create your own subsystem, and will touch on most of the concepts discussed elsewhere in this guide. The second focuses on a conceptual overview of the key interfaces and classes described in the example. Readers should feel free to start with the second section if that better fits their learning style. Jumping back and forth between the sections is also a good strategy.
Our example subsystem will keep track of all deployments of certain types containing a special marker file, and expose operations to see how long these deployments have been deployed.
To make your life easier we have provided a maven archetype which will create a skeleton project for implementing subsystems.
mvn archetype:generate \
    -DarchetypeArtifactId=wildfly-subsystem \
    -DarchetypeGroupId=org.wildfly.archetypes \
    -DarchetypeVersion=8.0.0.Final \
    -DarchetypeRepository=http://repository.jboss.org/nexus/content/groups/public
Maven will download the archetype and its dependencies, and ask you some questions:
$ mvn archetype:generate \
    -DarchetypeArtifactId=wildfly-subsystem \
    -DarchetypeGroupId=org.wildfly.archetypes \
    -DarchetypeVersion=8.0.0.Final \
    -DarchetypeRepository=http://repository.jboss.org/nexus/content/groups/public
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO]
.........
Define value for property 'groupId': : com.acme.corp
Define value for property 'artifactId': : acme-subsystem
Define value for property 'version': 1.0-SNAPSHOT: :
Define value for property 'package': com.acme.corp: : com.acme.corp.tracker
Define value for property 'module': : com.acme.corp.tracker
[INFO] Using property: name = WildFly subsystem project
Confirm properties configuration:
groupId: com.acme.corp
artifactId: acme-subsystem
version: 1.0-SNAPSHOT
package: com.acme.corp.tracker
module: com.acme.corp.tracker
name: WildFly subsystem project
 Y: : Y
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:42.563s
[INFO] Finished at: Fri Jul 08 14:30:09 BST 2011
[INFO] Final Memory: 7M/81M
[INFO] ------------------------------------------------------------------------
$
The prompts, in order, are:

1. Enter the groupId you wish to use
2. Enter the artifactId you wish to use
3. Enter the version you wish to use, or just hit Enter if you wish to accept the default 1.0-SNAPSHOT
4. Enter the java package you wish to use, or just hit Enter if you wish to accept the default (which is copied from groupId).
5. Enter the module name you wish to use for your extension.
6. Finally, if you are happy with your choices, hit Enter and Maven will generate the project for you.
You can also do this in Eclipse; see Creating your own application for more details. We now have a skeleton project that you can use to implement a subsystem. Import the acme-subsystem project into your favourite IDE. A nice side-effect of running this in the IDE is that you can see the javadoc of the WildFly classes and interfaces imported by the skeleton code. If you do a mvn install in the project, it will already build something that works if plugged into WildFly, but before doing that we will change it to do something more useful.
The rest of this section modifies the skeleton project created by the archetype to do something more useful, and the full code can be found in acme-subsystem.zip.
If you do a mvn install in the created project, you will see some tests being run
$ mvn install
[INFO] Scanning for projects...
[...]
[INFO] Surefire report directory: /Users/kabir/sourcecontrol/temp/archetype-test/acme-subsystem/target/surefire-reports

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.acme.corp.tracker.extension.SubsystemBaseParsingTestCase
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.424 sec
Running com.acme.corp.tracker.extension.SubsystemParsingTestCase
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec

Results :

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0

[...]
We will talk about these later in the Testing the parsers section.
First, let us define the schema for our subsystem. Rename src/main/resources/schema/mysubsystem.xsd to src/main/resources/schema/acme.xsd. Then open acme.xsd and modify it as follows:
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:com.acme.corp.tracker:1.0"
           xmlns="urn:com.acme.corp.tracker:1.0"
           elementFormDefault="qualified"
           attributeFormDefault="unqualified"
           version="1.0">

   <!-- The subsystem root element -->
   <xs:element name="subsystem" type="subsystemType"/>

   <xs:complexType name="subsystemType">
      <xs:all>
         <xs:element name="deployment-types" type="deployment-typesType"/>
      </xs:all>
   </xs:complexType>

   <xs:complexType name="deployment-typesType">
      <xs:choice minOccurs="0" maxOccurs="unbounded">
         <xs:element name="deployment-type" type="deployment-typeType"/>
      </xs:choice>
   </xs:complexType>

   <xs:complexType name="deployment-typeType">
      <xs:attribute name="suffix" use="required"/>
      <xs:attribute name="tick" type="xs:long" use="optional" default="10000"/>
   </xs:complexType>

</xs:schema>
Now modify the com.acme.corp.tracker.extension.SubsystemExtension class to contain the new namespace.
public class SubsystemExtension implements Extension {

    /** The name space used for the {@code subsystem} element */
    public static final String NAMESPACE = "urn:com.acme.corp.tracker:1.0";

    ...
The following example xml contains a valid subsystem configuration; we will see how to plug this into WildFly later in this tutorial.
<subsystem xmlns="urn:com.acme.corp.tracker:1.0">
   <deployment-types>
      <deployment-type suffix="sar" tick="10000"/>
      <deployment-type suffix="war" tick="10000"/>
   </deployment-types>
</subsystem>
Now when designing our model, we can either do a one to one mapping between the schema and the model or come up with something slightly or very different. To keep things simple, let us stay pretty true to the schema so that when executing a :read-resource(recursive=true) against our subsystem we'll see something like:
{ "outcome" => "success", "result" => {"type" => { "sar" => {"tick" => "10000"}, "war" => {"tick" => "10000"} }} }
We also need a name for our subsystem; to do that, change com.acme.corp.tracker.extension.SubsystemExtension:
public class SubsystemExtension implements Extension {
    ...
    /** The name of our subsystem within the model. */
    public static final String SUBSYSTEM_NAME = "tracker";
    ...
The SubsystemExtension.initialize() method defines the model; currently it sets up the basics to add our subsystem to the model:
@Override
public void initialize(ExtensionContext context) {
    //register subsystem with its model version
    final SubsystemRegistration subsystem = context.registerSubsystem(SUBSYSTEM_NAME, 1, 0);
    //register subsystem model with subsystem definition that defines all attributes and operations
    final ManagementResourceRegistration registration = subsystem.registerSubsystemModel(SubsystemDefinition.INSTANCE);
    //register describe operation, note that this can be also registered in SubsystemDefinition
    registration.registerOperationHandler(DESCRIBE, GenericSubsystemDescribeHandler.INSTANCE, GenericSubsystemDescribeHandler.INSTANCE, false, OperationEntry.EntryType.PRIVATE);
    //we can register additional submodels here
    //
    subsystem.registerXMLElementWriter(parser);
}
Next we obtain a ManagementResourceRegistration by registering the subsystem model. This is a compulsory step for every new subsystem.
final ManagementResourceRegistration registration = subsystem.registerSubsystemModel(SubsystemDefinition.INSTANCE);
Its parameter is an implementation of the ResourceDefinition interface, which means that when you call /subsystem=tracker:read-resource-description the information you see comes from the model that is defined by SubsystemDefinition.INSTANCE.
public class SubsystemDefinition extends SimpleResourceDefinition {
    public static final SubsystemDefinition INSTANCE = new SubsystemDefinition();

    private SubsystemDefinition() {
        super(SubsystemExtension.SUBSYSTEM_PATH,
                SubsystemExtension.getResourceDescriptionResolver(null),
                //We always need to add an 'add' operation
                SubsystemAdd.INSTANCE,
                //Every resource that is added, normally needs a remove operation
                SubsystemRemove.INSTANCE);
    }

    @Override
    public void registerOperations(ManagementResourceRegistration resourceRegistration) {
        super.registerOperations(resourceRegistration);
        //you can register additional operations here
    }

    @Override
    public void registerAttributes(ManagementResourceRegistration resourceRegistration) {
        //you can register attributes here
    }
}
Since we need a child resource type, we need to add a new ResourceDefinition for it.
The ManagementResourceRegistration obtained in SubsystemExtension.initialize() is then used to add additional operations or to register submodels to the /subsystem=tracker address. Every subsystem and resource must have an add operation, which can be registered either by the following line inside registerOperations() in your ResourceDefinition or by providing it in the constructor of your SimpleResourceDefinition, just as we did in the example above.
//We always need to add an 'add' operation
resourceRegistration.registerOperationHandler(ADD, SubsystemAdd.INSTANCE,
        new DefaultResourceAddDescriptionProvider(resourceRegistration, descriptionResolver), false);
The parameters when registering an operation handler are:
1. The name - i.e. ADD.
2. The handler instance - we will talk more about this below.
3. The handler description provider - we will talk more about this below.
4. Whether this operation handler is inherited - false means that this operation is not inherited, and will only apply to /subsystem=tracker. The description for this operation handler will be provided by the description provider from 3.
Let us first look at the description provider which is quite simple since this operation takes no parameters. The addition of type children will be handled by another operation handler, as we will see later on.
There are two ways to define a DescriptionProvider. One is to define it by hand using ModelNode, but as this has shown to be very error prone, there are lots of helper methods to describe the model for you automatically. The following example manually defines the description provider for the ADD operation handler:
/**
 * Used to create the description of the subsystem add method
 */
public static DescriptionProvider SUBSYSTEM_ADD = new DescriptionProvider() {
    public ModelNode getModelDescription(Locale locale) {
        //The locale is passed in so you can internationalize the strings used in the descriptions
        final ModelNode subsystem = new ModelNode();
        subsystem.get(OPERATION_NAME).set(ADD);
        subsystem.get(DESCRIPTION).set("Adds the tracker subsystem");
        return subsystem;
    }
};
Or you can use the API that does this for you. For the add and remove operations there are the classes DefaultResourceAddDescriptionProvider and DefaultResourceRemoveDescriptionProvider that do the work for you. If you use SimpleResourceDefinition, even that part is hidden from you.
resourceRegistration.registerOperationHandler(ADD, SubsystemAdd.INSTANCE,
        new DefaultResourceAddDescriptionProvider(resourceRegistration, descriptionResolver), false);

resourceRegistration.registerOperationHandler(REMOVE, SubsystemRemove.INSTANCE,
        new DefaultResourceRemoveDescriptionProvider(resourceRegistration, descriptionResolver), false);
For operation handlers other than add/remove you can use DefaultOperationDescriptionProvider, which takes an additional parameter for the name of the operation and an optional array of the parameters/attributes the operation takes. This is an example registering an operation "add-mime" with two parameters:
container.registerOperationHandler("add-mime",
        MimeMappingAdd.INSTANCE,
        new DefaultOperationDescriptionProvider("add-mime", Extension.getResourceDescriptionResolver("container.mime-mapping"), MIME_NAME, MIME_VALUE));
When describing an operation, the description provider's OPERATION_NAME must match the name used when calling ManagementResourceRegistration.registerOperationHandler().
Next we have the actual operation handler instance. Note that we have changed its populateModel() method to initialize the type child of the model.
class SubsystemAdd extends AbstractBoottimeAddStepHandler {

    static final SubsystemAdd INSTANCE = new SubsystemAdd();

    private SubsystemAdd() {
    }

    /** {@inheritDoc} */
    @Override
    protected void populateModel(ModelNode operation, ModelNode model) throws OperationFailedException {
        log.info("Populating the model");
        //Initialize the 'type' child node
        model.get("type").setEmptyObject();
    }
    ....
SubsystemAdd also has a performBoottime() method which is used for initializing the deployer chain associated with this subsystem. We will talk about the deployers later on. However, the basic idea for all operation handlers is that we do any model updates before changing the actual runtime state.
The rule of thumb is that everything that can be added can also be removed, so we register a remove handler for the subsystem in SubsystemDefinition.registerOperations(), or just provide the operation handler in the constructor.
//Every resource that is added, normally needs a remove operation
registration.registerOperationHandler(REMOVE, SubsystemRemove.INSTANCE,
        new DefaultResourceRemoveDescriptionProvider(resourceRegistration, descriptionResolver), false);
SubsystemRemove extends AbstractRemoveStepHandler, which takes care of removing the resource from the model, so we don't need to override its performRemove() method. Also, since the add handler did not install any services (services will be discussed later), we can delete the performRuntime() method generated by the archetype.
class SubsystemRemove extends AbstractRemoveStepHandler {

    static final SubsystemRemove INSTANCE = new SubsystemRemove();

    private final Logger log = Logger.getLogger(SubsystemRemove.class);

    private SubsystemRemove() {
    }
}
The description provider for the remove operation is simple and quite similar to that of the add handler, where just the name of the operation changes.
The type child does not exist in our skeleton project so we need to implement the operations to add and remove them from the model.
First we need an add operation to add the type child, create a class called com.acme.corp.tracker.extension.TypeAddHandler. In this case we extend the org.jboss.as.controller.AbstractAddStepHandler class and implement the org.jboss.as.controller.descriptions.DescriptionProvider interface. org.jboss.as.controller.OperationStepHandler is the main interface for the operation handlers, and AbstractAddStepHandler is an implementation of that which does the plumbing work for adding a resource to the model.
class TypeAddHandler extends AbstractAddStepHandler implements DescriptionProvider {

    public static final TypeAddHandler INSTANCE = new TypeAddHandler();

    private TypeAddHandler() {
    }
Then we define the model for the type child. Let's call it TypeDefinition, and for ease of use let it extend SimpleResourceDefinition instead of just implementing ResourceDefinition.
public class TypeDefinition extends SimpleResourceDefinition {

    public static final TypeDefinition INSTANCE = new TypeDefinition();

    //we define attribute named tick
    protected static final SimpleAttributeDefinition TICK =
            new SimpleAttributeDefinitionBuilder(TrackerExtension.TICK, ModelType.LONG)
                    .setAllowExpression(true)
                    .setXmlName(TrackerExtension.TICK)
                    .setFlags(AttributeAccess.Flag.RESTART_ALL_SERVICES)
                    .setDefaultValue(new ModelNode(1000))
                    .setAllowNull(false)
                    .build();

    private TypeDefinition() {
        super(TYPE_PATH, TrackerExtension.getResourceDescriptionResolver(TYPE), TypeAdd.INSTANCE, TypeRemove.INSTANCE);
    }

    @Override
    public void registerAttributes(ManagementResourceRegistration resourceRegistration) {
        resourceRegistration.registerReadWriteAttribute(TICK, null, TrackerTickHandler.INSTANCE);
    }
}
Then we do the work of updating the model by implementing the populateModel() method from AbstractAddStepHandler, which populates the model's attributes from the operation parameters. First we get hold of the model relative to the address of this operation (we will see later that we will register it against /subsystem=tracker/type=*), so we just specify an empty relative address, and we then populate our model with the parameters from the operation. There is a validateAndSet operation on AttributeDefinition that helps us validate and set the model based on the definition of the attribute.
@Override
protected void populateModel(ModelNode operation, ModelNode model) throws OperationFailedException {
    TICK.validateAndSet(operation, model);
}
We then override the performRuntime() method to perform our runtime changes, which in this case involves installing a service into the controller at the heart of WildFly. (AbstractAddStepHandler.performRuntime() is similar to AbstractBoottimeAddStepHandler.performBoottime() in that the model is updated before runtime changes are made.)
@Override
protected void performRuntime(OperationContext context, ModelNode operation, ModelNode model,
        ServiceVerificationHandler verificationHandler, List<ServiceController<?>> newControllers)
        throws OperationFailedException {
    String suffix = PathAddress.pathAddress(operation.get(ModelDescriptionConstants.ADDRESS)).getLastElement().getValue();
    long tick = TICK.resolveModelAttribute(context, model).asLong();
    TrackerService service = new TrackerService(suffix, tick);
    ServiceName name = TrackerService.createServiceName(suffix);
    ServiceController<TrackerService> controller = context.getServiceTarget()
            .addService(name, service)
            .addListener(verificationHandler)
            .setInitialMode(Mode.ACTIVE)
            .install();
    newControllers.add(controller);
}
}
Since the add methods will be of the format /subsystem=tracker/suffix=war:add(tick=1234), we look for the last element of the operation address, which is war in the example just given and use that as our suffix. We then create an instance of TrackerService and install that into the service target of the context and add the created service controller to the newControllers list.
The tracker service is quite simple. All services installed into WildFly must implement the org.jboss.msc.service.Service interface.
public class TrackerService implements Service<TrackerService>{
We then have some fields to keep the tick count and a thread which when run outputs all the deployments registered with our service.
private AtomicLong tick = new AtomicLong(10000);

private Set<String> deployments = Collections.synchronizedSet(new HashSet<String>());
private Set<String> coolDeployments = Collections.synchronizedSet(new HashSet<String>());
private final String suffix;

private Thread OUTPUT = new Thread() {
    @Override
    public void run() {
        while (true) {
            try {
                Thread.sleep(tick.get());
                System.out.println("Current deployments deployed while " + suffix + " tracking active:\n" + deployments
                        + "\nCool: " + coolDeployments.size());
            } catch (InterruptedException e) {
                interrupted();
                break;
            }
        }
    }
};

public TrackerService(String suffix, long tick) {
    this.suffix = suffix;
    this.tick.set(tick);
}
Next we have three methods which come from the Service interface. getValue() returns this service, start() is called when the service is started by the controller, and stop() is called when the service is stopped by the controller; they start and stop the thread outputting the deployments.
@Override
public TrackerService getValue() throws IllegalStateException, IllegalArgumentException {
    return this;
}

@Override
public void start(StartContext context) throws StartException {
    OUTPUT.start();
}

@Override
public void stop(StopContext context) {
    OUTPUT.interrupt();
}
Next we have a utility method to create the ServiceName which is used to register the service in the controller.
public static ServiceName createServiceName(String suffix) {
    return ServiceName.JBOSS.append("tracker", suffix);
}
Finally we have some methods to add and remove deployments, and to set and read the tick. The 'cool' deployments will be explained later.
public void addDeployment(String name) {
    deployments.add(name);
}

public void addCoolDeployment(String name) {
    coolDeployments.add(name);
}

public void removeDeployment(String name) {
    deployments.remove(name);
    coolDeployments.remove(name);
}

void setTick(long tick) {
    this.tick.set(tick);
}

public long getTick() {
    return this.tick.get();
}
}//TrackerService - end
Since we are able to add type children, we need a way to be able to remove them, so we create a com.acme.corp.tracker.extension.TypeRemoveHandler. In this case we extend AbstractRemoveStepHandler, which takes care of removing the resource from the model, so we don't need to override its performRemove() method. But we need to implement the DescriptionProvider method to provide the model description, and since the add handler installs the TrackerService, we need to remove that in the performRuntime() method.
public class TypeRemoveHandler extends AbstractRemoveStepHandler {

    public static final TypeRemoveHandler INSTANCE = new TypeRemoveHandler();

    private TypeRemoveHandler() {
    }

    @Override
    protected void performRuntime(OperationContext context, ModelNode operation, ModelNode model) throws OperationFailedException {
        String suffix = PathAddress.pathAddress(operation.get(ModelDescriptionConstants.ADDRESS)).getLastElement().getValue();
        ServiceName name = TrackerService.createServiceName(suffix);
        context.removeService(name);
    }
}
We then need a description provider for the type part of the model itself, so we modify TypeDefinition to register the attribute:
class TypeDefinition {
    ...
    @Override
    public void registerAttributes(ManagementResourceRegistration resourceRegistration) {
        resourceRegistration.registerReadWriteAttribute(TICK, null, TrackerTickHandler.INSTANCE);
    }
}
Finally, we need to specify that our new type child and associated handlers go under /subsystem=tracker/type=* in the model, by registering it with the model in SubsystemExtension.initialize(). So we add the following just before the end of the method:
@Override
public void initialize(ExtensionContext context) {
    final SubsystemRegistration subsystem = context.registerSubsystem(SUBSYSTEM_NAME, 1, 0);
    final ManagementResourceRegistration registration = subsystem.registerSubsystemModel(TrackerSubsystemDefinition.INSTANCE);
    //Add the type child
    ManagementResourceRegistration typeChild = registration.registerSubModel(TypeDefinition.INSTANCE);
    subsystem.registerXMLElementWriter(parser);
}
The above first creates a child of our main subsystem registration for the relative address type=*, and gets the typeChild registration.
To this we add the TypeAddHandler and TypeRemoveHandler.
The add variety is added under the name add and the remove handler under the name remove, and for each registered operation handler we use the handler singleton instance as both the handler parameter and as the DescriptionProvider.
Finally, we register tick as a read/write attribute. The null parameter means we don't do anything special with regards to reading it; for the write handler we supply an operation handler called TrackerTickHandler.
Registering it as a read/write attribute means we can use the :write-attribute operation to modify the value of the parameter, and it will be handled by TrackerTickHandler.
Not registering a write attribute handler makes the attribute read only.
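For comparison, a minimal sketch of registering tick as read-only instead (same TICK definition as above; no write handler is supplied, so :write-attribute would be rejected):

@Override
public void registerAttributes(ManagementResourceRegistration resourceRegistration) {
    //Read-only: no write handler is given, and null means no custom read handler either
    resourceRegistration.registerReadOnlyAttribute(TICK, null);
}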
TrackerTickHandler extends AbstractWriteAttributeHandler directly, and so must implement its applyUpdateToRuntime() and revertUpdateToRuntime() methods. The superclass takes care of the model manipulation (validation, setting), leaving us to deal only with what we need to do for the runtime.
class TrackerTickHandler extends AbstractWriteAttributeHandler<Void> {

    public static final TrackerTickHandler INSTANCE = new TrackerTickHandler();

    private TrackerTickHandler() {
        super(TypeDefinition.TICK);
    }

    protected boolean applyUpdateToRuntime(OperationContext context, ModelNode operation, String attributeName,
            ModelNode resolvedValue, ModelNode currentValue, HandbackHolder<Void> handbackHolder) throws OperationFailedException {
        modifyTick(context, operation, resolvedValue.asLong());
        return false;
    }

    protected void revertUpdateToRuntime(OperationContext context, ModelNode operation, String attributeName,
            ModelNode valueToRestore, ModelNode valueToRevert, Void handback) {
        modifyTick(context, operation, valueToRestore.asLong());
    }

    private void modifyTick(OperationContext context, ModelNode operation, long value) throws OperationFailedException {
        final String suffix = PathAddress.pathAddress(operation.get(ModelDescriptionConstants.ADDRESS)).getLastElement().getValue();
        TrackerService service = (TrackerService) context.getServiceRegistry(true)
                .getRequiredService(TrackerService.createServiceName(suffix)).getValue();
        service.setTick(value);
    }
}
The operation used to execute this will be of the form /subsystem=tracker/type=war:write-attribute(name=tick,value=12345), so we first get the suffix from the operation address and the tick value from the resolvedValue parameter, and use those to update the tick on the TrackerService.
We then add a new step associated with the RUNTIME stage to update the tick of the TrackerService for our suffix. This is essential since the call to context.getServiceRegistry() will fail unless the step accessing it belongs to the RUNTIME stage.
When implementing execute(), you must call context.completeStep() when you are done.
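If you were writing such a handler as a plain OperationStepHandler rather than extending AbstractWriteAttributeHandler, the step structure would look roughly like the following sketch (the handler shown here is hypothetical and not part of the example code):

class CustomTickHandler implements OperationStepHandler {

    @Override
    public void execute(OperationContext context, ModelNode operation) throws OperationFailedException {
        //Stage.MODEL work (updating the configuration model) would go here

        //Runtime changes must happen in a RUNTIME stage step, otherwise
        //context.getServiceRegistry() will fail
        context.addStep(new OperationStepHandler() {
            @Override
            public void execute(OperationContext context, ModelNode operation) throws OperationFailedException {
                //e.g. look up the TrackerService and call setTick() here
                context.completeStep();
            }
        }, OperationContext.Stage.RUNTIME);

        context.completeStep();
    }
}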
WildFly uses the StAX API to parse the xml files. This is initialized in SubsystemExtension by mapping our parser onto our namespace:
public class SubsystemExtension implements Extension {

    /** The name space used for the {@code subsystem} element */
    public static final String NAMESPACE = "urn:com.acme.corp.tracker:1.0";

    ...

    protected static final PathElement SUBSYSTEM_PATH = PathElement.pathElement(SUBSYSTEM, SUBSYSTEM_NAME);
    protected static final PathElement TYPE_PATH = PathElement.pathElement(TYPE);

    /** The parser used for parsing our subsystem */
    private final SubsystemParser parser = new SubsystemParser();

    @Override
    public void initializeParsers(ExtensionParsingContext context) {
        context.setSubsystemXmlMapping(NAMESPACE, parser);
    }
    ...
We then need to write the parser. The contract is that we read our subsystem's xml and create the operations that will populate the model with the state contained in the xml. These operations will then be executed on our behalf as part of the parsing process. The entry point is the readElement() method.
public class SubsystemExtension implements Extension {

    /**
     * The subsystem parser, which uses stax to read and write to and from xml
     */
    private static class SubsystemParser implements XMLStreamConstants, XMLElementReader<List<ModelNode>>, XMLElementWriter<SubsystemMarshallingContext> {

        /** {@inheritDoc} */
        @Override
        public void readElement(XMLExtendedStreamReader reader, List<ModelNode> list) throws XMLStreamException {
            // Require no attributes
            ParseUtils.requireNoAttributes(reader);

            //Add the main subsystem 'add' operation
            final ModelNode subsystem = new ModelNode();
            subsystem.get(OP).set(ADD);
            subsystem.get(OP_ADDR).set(PathAddress.pathAddress(SUBSYSTEM_PATH).toModelNode());
            list.add(subsystem);

            //Read the children
            while (reader.hasNext() && reader.nextTag() != END_ELEMENT) {
                if (!reader.getLocalName().equals("deployment-types")) {
                    throw ParseUtils.unexpectedElement(reader);
                }
                while (reader.hasNext() && reader.nextTag() != END_ELEMENT) {
                    if (reader.isStartElement()) {
                        readDeploymentType(reader, list);
                    }
                }
            }
        }

        private void readDeploymentType(XMLExtendedStreamReader reader, List<ModelNode> list) throws XMLStreamException {
            if (!reader.getLocalName().equals("deployment-type")) {
                throw ParseUtils.unexpectedElement(reader);
            }

            ModelNode addTypeOperation = new ModelNode();
            addTypeOperation.get(OP).set(ModelDescriptionConstants.ADD);

            String suffix = null;
            for (int i = 0; i < reader.getAttributeCount(); i++) {
                String attr = reader.getAttributeLocalName(i);
                String value = reader.getAttributeValue(i);
                if (attr.equals("tick")) {
                    TypeDefinition.TICK.parseAndSetParameter(value, addTypeOperation, reader);
                } else if (attr.equals("suffix")) {
                    suffix = value;
                } else {
                    throw ParseUtils.unexpectedAttribute(reader, i);
                }
            }
            ParseUtils.requireNoContent(reader);
            if (suffix == null) {
                throw ParseUtils.missingRequiredElement(reader, Collections.singleton("suffix"));
            }

            //Add the 'add' operation for each 'type' child
            PathAddress addr = PathAddress.pathAddress(SUBSYSTEM_PATH, PathElement.pathElement(TYPE, suffix));
            addTypeOperation.get(OP_ADDR).set(addr.toModelNode());
            list.add(addTypeOperation);
        }
        ...
So in the above we always create the add operation for our subsystem. Due to its address /subsystem=tracker, defined by SUBSYSTEM_PATH, this will trigger the SubsystemAddHandler we created earlier when we invoke /subsystem=tracker:add. We then parse the child elements and create an add operation for the child address for each type child. Since the address will for example be /subsystem=tracker/type=sar (defined by TYPE_PATH), and TypeAddHandler is registered for all type subaddresses, the TypeAddHandler will get invoked for those operations. Note that when parsing the tick attribute we use the attribute definition we created in TypeDefinition to parse the attribute value and apply all the rules we specified for this attribute; this also enables us to properly support expressions on attributes.
The parser is also used to marshal the model to xml whenever something modifies the model, for which the entry point is the writeContent() method:
private static class SubsystemParser implements XMLStreamConstants, XMLElementReader<List<ModelNode>>, XMLElementWriter<SubsystemMarshallingContext> {
    ...
    /** {@inheritDoc} */
    @Override
    public void writeContent(final XMLExtendedStreamWriter writer, final SubsystemMarshallingContext context) throws XMLStreamException {
        //Write out the main subsystem element
        context.startSubsystemElement(TrackerExtension.NAMESPACE, false);
        writer.writeStartElement("deployment-types");
        ModelNode node = context.getModelNode();
        ModelNode type = node.get(TYPE);
        for (Property property : type.asPropertyList()) {
            //write each child element to xml
            writer.writeStartElement("deployment-type");
            writer.writeAttribute("suffix", property.getName());
            ModelNode entry = property.getValue();
            TypeDefinition.TICK.marshallAsAttribute(entry, true, writer);
            writer.writeEndElement();
        }
        //End deployment-types
        writer.writeEndElement();
        //End subsystem
        writer.writeEndElement();
    }
}
Then we have to implement the SubsystemDescribeHandler, which translates the current state of the model into operations similar to the ones created by the parser. The SubsystemDescribeHandler is only used when running in a managed domain, and is used when the host controller queries the domain controller for the configuration of the profile used to start up each server. In our case the SubsystemDescribeHandler adds the operation to add the subsystem and then adds the operation to add each type child. Since we are using ResourceDefinition for defining the subsystem, all of that is generated for us, but if you want to customize it you can do so by implementing it like this:
private static class SubsystemDescribeHandler implements OperationStepHandler, DescriptionProvider {
    static final SubsystemDescribeHandler INSTANCE = new SubsystemDescribeHandler();

    public void execute(OperationContext context, ModelNode operation) throws OperationFailedException {
        //Add the main operation
        context.getResult().add(createAddSubsystemOperation());

        //Add the operations to create each child
        ModelNode node = context.readModel(PathAddress.EMPTY_ADDRESS);
        for (Property property : node.get("type").asPropertyList()) {
            ModelNode addType = new ModelNode();
            addType.get(OP).set(ModelDescriptionConstants.ADD);
            PathAddress addr = PathAddress.pathAddress(SUBSYSTEM_PATH, PathElement.pathElement("type", property.getName()));
            addType.get(OP_ADDR).set(addr.toModelNode());
            if (property.getValue().hasDefined("tick")) {
                TypeDefinition.TICK.validateAndSet(property.getValue(), addType);
            }
            context.getResult().add(addType);
        }
        context.completeStep();
    }
}
The testing framework was moved from the archetype into the core JBoss AS 7 sources between JBoss AS 7.0.0 and JBoss AS 7.0.1, and has since been improved upon and is used internally for testing subsystems. The difference between the two versions is that in 7.0.0.Final the testing framework is bundled with the code generated by the archetype (in a sub-package of the package specified for your subsystem, e.g. com.acme.corp.tracker.support), and the test extends the AbstractParsingTest class.
From 7.0.1 the testing framework is now brought in via the org.jboss.as:jboss-as-subsystem-test maven artifact, and the test's superclass is org.jboss.as.subsystem.test.AbstractSubsystemTest. The concepts are the same but more and more functionality will be available as JBoss AS 7 is developed.
Now that we have modified our parsers we need to update our tests to reflect the new model. There are currently three tests testing the basic functionality, something which is a lot easier to debug from your IDE before you plug it into the application server. We will talk about these tests in turn and they all live in com.acme.corp.tracker.extension.SubsystemParsingTestCase. SubsystemParsingTestCase extends AbstractSubsystemTest which does a lot of the setup for you and contains utility methods for verifying things from your test. See the javadoc of that class for more information about the functionality available to you. And by all means feel free to add more tests for your subsystem, here we are only testing for the best case scenario while you will probably want to throw in a few tests for edge cases.
The first test we need to modify is testParseSubsystem(). It tests that the parsed xml becomes the expected operations that will be passed into the controller, so let us tweak this test to match our subsystem. First we tell the test to parse the xml into operations:
@Test
public void testParseSubsystem() throws Exception {
    //Parse the subsystem xml into operations
    String subsystemXml =
            "<subsystem xmlns=\"" + SubsystemExtension.NAMESPACE + "\">" +
            "   <deployment-types>" +
            "       <deployment-type suffix=\"tst\" tick=\"12345\"/>" +
            "   </deployment-types>" +
            "</subsystem>";
    List<ModelNode> operations = super.parse(subsystemXml);
There should be one operation for adding the subsystem itself and one operation for adding the deployment-type, so check that we got two operations:
    ///Check that we have the expected number of operations
    Assert.assertEquals(2, operations.size());
Now check that the first operation is add for the address /subsystem=tracker:
    //Check that each operation has the correct content
    //The add subsystem operation will happen first
    ModelNode addSubsystem = operations.get(0);
    Assert.assertEquals(ADD, addSubsystem.get(OP).asString());
    PathAddress addr = PathAddress.pathAddress(addSubsystem.get(OP_ADDR));
    Assert.assertEquals(1, addr.size());
    PathElement element = addr.getElement(0);
    Assert.assertEquals(SUBSYSTEM, element.getKey());
    Assert.assertEquals(SubsystemExtension.SUBSYSTEM_NAME, element.getValue());
Then check that the second operation is add for the address /subsystem=tracker/type=tst, and that 12345 was picked up for the value of the tick parameter:
    //Then we will get the add type operation
    ModelNode addType = operations.get(1);
    Assert.assertEquals(ADD, addType.get(OP).asString());
    Assert.assertEquals(12345, addType.get("tick").asLong());
    addr = PathAddress.pathAddress(addType.get(OP_ADDR));
    Assert.assertEquals(2, addr.size());
    element = addr.getElement(0);
    Assert.assertEquals(SUBSYSTEM, element.getKey());
    Assert.assertEquals(SubsystemExtension.SUBSYSTEM_NAME, element.getValue());
    element = addr.getElement(1);
    Assert.assertEquals("type", element.getKey());
    Assert.assertEquals("tst", element.getValue());
}
The second test we need to modify is testInstallIntoController() which tests that the xml installs properly into the controller. In other words we are making sure that the add operations we created earlier work properly. First we create the xml and install it into the controller. Behind the scenes this will parse the xml into operations as we saw in the last test, but it will also create a new controller and boot that up using the created operations
@Test
public void testInstallIntoController() throws Exception {
    //Parse the subsystem xml and install into the controller
    String subsystemXml =
            "<subsystem xmlns=\"" + SubsystemExtension.NAMESPACE + "\">" +
            "   <deployment-types>" +
            "       <deployment-type suffix=\"tst\" tick=\"12345\"/>" +
            "   </deployment-types>" +
            "</subsystem>";
    KernelServices services = super.installInController(subsystemXml);
The returned KernelServices allow us to execute operations on the controller, and to read the whole model.
    //Read the whole model and make sure it looks as expected
    ModelNode model = services.readWholeModel();
    //Useful for debugging :-)
    //System.out.println(model);
Now we make sure that the structure of the model within the controller has the expected format and values
    Assert.assertTrue(model.get(SUBSYSTEM).hasDefined(SubsystemExtension.SUBSYSTEM_NAME));
    Assert.assertTrue(model.get(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME).hasDefined("type"));
    Assert.assertTrue(model.get(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME, "type").hasDefined("tst"));
    Assert.assertTrue(model.get(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME, "type", "tst").hasDefined("tick"));
    Assert.assertEquals(12345, model.get(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME, "type", "tst", "tick").asLong());
}
The last test provided is called testParseAndMarshalModel(). Its main purpose is to make sure that our SubsystemParser.writeContent() works as expected. This is achieved by starting a controller in the same way as before:
@Test
public void testParseAndMarshalModel() throws Exception {
    //Parse the subsystem xml and install into the first controller
    String subsystemXml =
            "<subsystem xmlns=\"" + SubsystemExtension.NAMESPACE + "\">" +
            "   <deployment-types>" +
            "       <deployment-type suffix=\"tst\" tick=\"12345\"/>" +
            "   </deployment-types>" +
            "</subsystem>";
    KernelServices servicesA = super.installInController(subsystemXml);
Now we read the model and the xml that was persisted from the first controller, and use that xml to start a second controller
    //Get the model and the persisted xml from the first controller
    ModelNode modelA = servicesA.readWholeModel();
    String marshalled = servicesA.getPersistedSubsystemXml();

    //Install the persisted xml from the first controller into a second controller
    KernelServices servicesB = super.installInController(marshalled);
Finally we read the model from the second controller, and make sure that the models are identical by calling compare() on the test superclass.
    ModelNode modelB = servicesB.readWholeModel();

    //Make sure the models from the two controllers are identical
    super.compare(modelA, modelB);
}
We then have a test that needs no changes from what the archetype provides us with. As we have seen before, we start a controller:
@Test
public void testDescribeHandler() throws Exception {
    //Parse the subsystem xml and install into the first controller
    String subsystemXml =
            "<subsystem xmlns=\"" + SubsystemExtension.NAMESPACE + "\">" +
            "</subsystem>";
    KernelServices servicesA = super.installInController(subsystemXml);
We then call /subsystem=tracker:describe which outputs the subsystem as operations needed to reach the current state (Done by our SubsystemDescribeHandler)
    //Get the model and the describe operations from the first controller
    ModelNode modelA = servicesA.readWholeModel();
    ModelNode describeOp = new ModelNode();
    describeOp.get(OP).set(DESCRIBE);
    describeOp.get(OP_ADDR).set(
            PathAddress.pathAddress(
                    PathElement.pathElement(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME)).toModelNode());
    List<ModelNode> operations = super.checkResultAndGetContents(servicesA.executeOperation(describeOp)).asList();
Then we create a new controller using those operations
    //Install the describe options from the first controller into a second controller
    KernelServices servicesB = super.installInController(operations);
And then we read the model from the second controller and make sure that the two subsystems are identical
    ModelNode modelB = servicesB.readWholeModel();

    //Make sure the models from the two controllers are identical
    super.compare(modelA, modelB);
}
To test the removal of the subsystem and child resources we modify the testSubsystemRemoval() test provided by the archetype:
/**
 * Tests that the subsystem can be removed
 */
@Test
public void testSubsystemRemoval() throws Exception {
    //Parse the subsystem xml and install into the first controller
We provide xml for the subsystem installing a child, which in turn installs a TrackerService
    String subsystemXml =
            "<subsystem xmlns=\"" + SubsystemExtension.NAMESPACE + "\">" +
            "   <deployment-types>" +
            "       <deployment-type suffix=\"tst\" tick=\"12345\"/>" +
            "   </deployment-types>" +
            "</subsystem>";
    KernelServices services = super.installInController(subsystemXml);
Having installed the xml into the controller we make sure the TrackerService is there
    //Sanity check to test the service for 'tst' was there
    services.getContainer().getRequiredService(TrackerService.createServiceName("tst"));
This call from the subsystem test harness will call remove for each level in our subsystem, children first, and validate that the subsystem model is empty at the end.
    //Checks that the subsystem was removed from the model
    super.assertRemoveSubsystemResources(services);
Finally we check that all the services were removed by the remove handlers
    //Check that any services that were installed were removed here
    try {
        services.getContainer().getRequiredService(TrackerService.createServiceName("tst"));
        Assert.fail("Should have removed services");
    } catch (Exception expected) {
    }
}
For good measure, let us throw in another test which adds a deployment-type and also changes its attribute at runtime. So first of all, boot up the controller with the same xml we have been using so far:
@Test
public void testExecuteOperations() throws Exception {
    String subsystemXml =
            "<subsystem xmlns=\"" + SubsystemExtension.NAMESPACE + "\">" +
            "   <deployment-types>" +
            "       <deployment-type suffix=\"tst\" tick=\"12345\"/>" +
            "   </deployment-types>" +
            "</subsystem>";
    KernelServices services = super.installInController(subsystemXml);
Now create an operation which does the same as the following CLI command: /subsystem=tracker/type=foo:add(tick=1000)
    //Add another type
    PathAddress fooTypeAddr = PathAddress.pathAddress(
            PathElement.pathElement(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME),
            PathElement.pathElement("type", "foo"));
    ModelNode addOp = new ModelNode();
    addOp.get(OP).set(ADD);
    addOp.get(OP_ADDR).set(fooTypeAddr.toModelNode());
    addOp.get("tick").set(1000);
Execute the operation and make sure it was successful
    ModelNode result = services.executeOperation(addOp);
    Assert.assertEquals(SUCCESS, result.get(OUTCOME).asString());
Read the whole model and make sure that the original data is still there (i.e. the same as what was checked by testInstallIntoController()):
    ModelNode model = services.readWholeModel();
    Assert.assertTrue(model.get(SUBSYSTEM).hasDefined(SubsystemExtension.SUBSYSTEM_NAME));
    Assert.assertTrue(model.get(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME).hasDefined("type"));
    Assert.assertTrue(model.get(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME, "type").hasDefined("tst"));
    Assert.assertTrue(model.get(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME, "type", "tst").hasDefined("tick"));
    Assert.assertEquals(12345, model.get(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME, "type", "tst", "tick").asLong());
Then make sure our new type has been added:
    Assert.assertTrue(model.get(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME, "type").hasDefined("foo"));
    Assert.assertTrue(model.get(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME, "type", "foo").hasDefined("tick"));
    Assert.assertEquals(1000, model.get(SUBSYSTEM, SubsystemExtension.SUBSYSTEM_NAME, "type", "foo", "tick").asLong());
Then we call write-attribute to change the tick value of /subsystem=tracker/type=foo:
    //Call write-attribute
    ModelNode writeOp = new ModelNode();
    writeOp.get(OP).set(WRITE_ATTRIBUTE_OPERATION);
    writeOp.get(OP_ADDR).set(fooTypeAddr.toModelNode());
    writeOp.get(NAME).set("tick");
    writeOp.get(VALUE).set(3456);
    result = services.executeOperation(writeOp);
    Assert.assertEquals(SUCCESS, result.get(OUTCOME).asString());
To give you exposure to other ways of doing things, now instead of reading the whole model to check the attribute, we call read-attribute instead, and make sure it has the value we set it to.
    //Check that write attribute took effect, this time by calling read-attribute instead of reading the whole model
    ModelNode readOp = new ModelNode();
    readOp.get(OP).set(READ_ATTRIBUTE_OPERATION);
    readOp.get(OP_ADDR).set(fooTypeAddr.toModelNode());
    readOp.get(NAME).set("tick");
    result = services.executeOperation(readOp);
    Assert.assertEquals(3456, checkResultAndGetContents(result).asLong());
Since each type installs its own copy of TrackerService, we get the TrackerService for type=foo from the service container exposed by the kernel services and make sure it has the right value:
    TrackerService service = (TrackerService) services.getContainer().getService(TrackerService.createServiceName("foo")).getValue();
    Assert.assertEquals(3456, service.getTick());
}
When discussing SubsystemAddHandler we did not mention the work done to install the deployers, which is done in the following method:
@Override
public void performBoottime(OperationContext context, ModelNode operation, ModelNode model,
        ServiceVerificationHandler verificationHandler, List<ServiceController<?>> newControllers)
        throws OperationFailedException {

    log.info("Populating the model");

    //Add deployment processors here
    //Remove this if you don't need to hook into the deployers, or you can add as many as you like
    //see SubDeploymentProcessor for explanation of the phases
    context.addStep(new AbstractDeploymentChainStep() {
        public void execute(DeploymentProcessorTarget processorTarget) {
            processorTarget.addDeploymentProcessor(SubsystemDeploymentProcessor.PHASE, SubsystemDeploymentProcessor.priority, new SubsystemDeploymentProcessor());
        }
    }, OperationContext.Stage.RUNTIME);
}
This adds an extra step which is responsible for installing deployment processors. You can add as many as you like, or avoid adding any altogether depending on your needs. Each processor has a Phase and a priority. Phases are sequential, and a deployment passes through each phase's deployment processors. The priority specifies where within a phase the processor appears. See org.jboss.as.server.deployment.Phase for more information about phases.
In our case we are keeping it simple and staying with one deployment processor with the phase and priority created for us by the maven archetype. The phases will be explained in the next section. The deployment processor is as follows:
public class SubsystemDeploymentProcessor implements DeploymentUnitProcessor {
    ...

    @Override
    public void deploy(DeploymentPhaseContext phaseContext) throws DeploymentUnitProcessingException {
        String name = phaseContext.getDeploymentUnit().getName();
        TrackerService service = getTrackerService(phaseContext.getServiceRegistry(), name);
        if (service != null) {
            ResourceRoot root = phaseContext.getDeploymentUnit().getAttachment(Attachments.DEPLOYMENT_ROOT);
            VirtualFile cool = root.getRoot().getChild("META-INF/cool.txt");
            service.addDeployment(name);
            if (cool.exists()) {
                service.addCoolDeployment(name);
            }
        }
    }

    @Override
    public void undeploy(DeploymentUnit context) {
        context.getServiceRegistry();
        String name = context.getName();
        TrackerService service = getTrackerService(context.getServiceRegistry(), name);
        if (service != null) {
            service.removeDeployment(name);
        }
    }

    private TrackerService getTrackerService(ServiceRegistry registry, String name) {
        int last = name.lastIndexOf(".");
        String suffix = name.substring(last + 1);
        ServiceController<?> container = registry.getService(TrackerService.createServiceName(suffix));
        if (container != null) {
            TrackerService service = (TrackerService) container.getValue();
            return service;
        }
        return null;
    }
}
The deploy() method is called when a deployment is being deployed. In this case we look for the TrackerService instance for the service name created from the deployment's suffix. If there is one it means that we are meant to be tracking deployments with this suffix (i.e. TypeAddHandler was called for this suffix), and if we find one we add the deployment's name to it. Similarly undeploy() is called when a deployment is being undeployed, and if there is a TrackerService instance for the deployment's suffix, we remove the deployment's name from it.
The code in the SubsystemDeploymentProcessor uses an attachment, which is the means of communication between the individual deployment processors. A deployment processor belonging to a phase may create an attachment which is then read further along the chain of deployment unit processors. In the above example we look for the Attachments.DEPLOYMENT_ROOT attachment, which is a view of the file structure of the deployment unit put in place before the chain of deployment unit processors is invoked.
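To illustrate the mechanism with something other than DEPLOYMENT_ROOT, here is a sketch of a custom attachment; the key name and payload are made up for illustration and are not part of the tracker example:

//Hypothetical key, typically declared on a class both processors can see
public static final AttachmentKey<String> TRACKED_SUFFIX = AttachmentKey.create(String.class);

//A processor early in the chain attaches a value to the deployment unit...
phaseContext.getDeploymentUnit().putAttachment(TRACKED_SUFFIX, "war");

//...and a processor in a later phase reads it back
String suffix = phaseContext.getDeploymentUnit().getAttachment(TRACKED_SUFFIX);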
As mentioned above, the deployment unit processors are organized in phases, and have a relative order within each phase. A deployment unit passes through all the deployment unit processors in that order. A deployment unit processor may choose to take action or not depending on what attachments are available. Let's take a quick look at what the deployment unit processors do in each of the phases described in org.jboss.as.server.deployment.Phase.
STRUCTURE - The deployment unit processors in this phase determine the structure of a deployment, and look for sub deployments and metadata files.

PARSE - In this phase the deployment unit processors parse the deployment descriptors and build up the annotation index. Class-Path entries from the META-INF/MANIFEST.MF are added.

DEPENDENCIES - Extra class path dependencies are added. For example, if deploying a war file, the commonly needed dependencies for a web application are added.

CONFIGURE_MODULE - In this phase the modular class loader for the deployment is created. No attempt should be made to load classes from the deployment until after this phase.

POST_MODULE - Now that our class loader has been constructed we have access to the classes. In this stage deployment unit processors may use the Attachments.REFLECTION_INDEX attachment, which is a deployment index used to obtain members of classes in the deployment and to invoke upon them, bypassing the inefficiencies of using java.lang.reflect directly.

INSTALL - Install new services coming from the deployment.

CLEANUP - Attachments put in place earlier in the deployment unit processor chain may be removed here.
Expressions are a mechanism that enables you to support variables in your attributes, for instance when you want the value of an attribute to be resolved using system or environment properties.
An example expression is
${jboss.bind.address.management:127.0.0.1}
which means that the value should be taken from a system property named jboss.bind.address.management and, if it is not defined, the value 127.0.0.1 should be used.
Expressions can be resolved against:

* System properties, which are resolved using java.lang.System.getProperty(String key).
* Environment properties, which are resolved using java.lang.System.getenv(String name).
* Security vault expressions, resolved against the security vault configured for the server or Host Controller that needs to resolve the expression.
In all cases, the syntax for the expression is
${expression_to_resolve}
For an expression meant to be resolved against environment properties, the expression_to_resolve must be prefixed with env.; the portion after env. will be the name passed to java.lang.System.getenv(String name).
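For example, an expression resolved against a DATABASE_HOST environment variable (a made-up name), falling back to localhost if it is not set, would look like:

${env.DATABASE_HOST:localhost}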
Security vault expressions do not support default values (i.e. the 127.0.0.1 in the jboss.bind.address.management:127.0.0.1 example above.)
The easiest way to support expressions in your subsystem is by using AttributeDefinition, which provides support for expressions simply by being used correctly.
When we create an AttributeDefinition, all we need to do is mark that it allows expressions. Here is an example of how to define an attribute that allows expressions to be used:
SimpleAttributeDefinition MY_ATTRIBUTE =
        new SimpleAttributeDefinitionBuilder("my-attribute", ModelType.INT, true)
                .setAllowExpression(true)
                .setFlags(AttributeAccess.Flag.RESTART_ALL_SERVICES)
                .setDefaultValue(new ModelNode(1))
                .build();
Then later when you are parsing the xml configuration you should use the MY_ATTRIBUTE attribute definition to set the value to the management operation ModelNode you are creating.
....
String attr = reader.getAttributeLocalName(i);
String value = reader.getAttributeValue(i);
if (attr.equals("my-attribute")) {
    MY_ATTRIBUTE.parseAndSetParameter(value, operation, reader);
} else if (attr.equals("suffix")) {
.....
Note that this just helps you to properly set the value on the model node you are working on, so there is no need to additionally set anything on the model for this attribute. The parseAndSetParameter method parses the value that was read from xml for possible expressions, and if it finds any it creates a special model node of type ModelType.EXPRESSION.
Later, in your operation handlers where you implement populateModel and have to store the value from the operation to the configuration model, you also use this MY_ATTRIBUTE attribute definition:
@Override
protected void populateModel(ModelNode operation, ModelNode model) throws OperationFailedException {
    MY_ATTRIBUTE.validateAndSet(operation, model);
}
This will make sure that the attribute that is stored from the operation to the model is valid and nothing is lost. It also checks the value stored in the operation ModelNode, and if it isn't already ModelType.EXPRESSION, it checks if the value is a string that contains the expression syntax. If so, the value stored in the model will be of type ModelType.EXPRESSION. Doing this ensures that expressions are properly handled when they appear in operations that weren't created by the subsystem parser, but are instead passed in from CLI or admin console users.
As the last step, we need to use the value of the attribute. This is usually done inside the performRuntime method:
protected void performRuntime(OperationContext context, ModelNode operation, ModelNode model,
        ServiceVerificationHandler verificationHandler, List<ServiceController<?>> newControllers)
        throws OperationFailedException {
    ....
    final int attributeValue = MY_ATTRIBUTE.resolveModelAttribute(context, model).asInt();
    ...
}
As you can see, resolving the attribute's value is not done until it is needed for use in the subsystem's runtime services. The resolved value is not stored in the configuration model; the unresolved expression is. That way we do not lose any information in the model, and we can ensure that marshalling is also done properly, since we must marshal back the unresolved value.
The attribute definition also helps you with that:
public void writeContent(XMLExtendedStreamWriter writer, SubsystemMarshallingContext context) throws XMLStreamException {
    ....
    MY_ATTRIBUTE.marshallAsAttribute(sessionData, writer);
    MY_OTHER_ATTRIBUTE.marshallAsElement(sessionData, false, writer);
    ...
}
In the first major section of this guide, we provided an example of how to implement an extension to the AS. The emphasis there was learning by doing. In this section, we'll focus a bit more on the major WildFly interfaces and classes that are most relevant to extension developers. The best way to learn about these interfaces and classes in detail is to look at their javadoc. What we'll try to do here is provide a brief introduction to the key items and how they relate to each other.
Before digging into this section, readers are encouraged to read the "Core Management Concepts" section of the Admin Guide.
The org.jboss.as.controller.Extension interface is the hook by which your extension to the core AS is able to integrate with the AS. During boot of the AS, when the <extension> element in the AS's xml configuration file naming your extension is parsed, the JBoss Modules module named in the element's name attribute is loaded. The standard JDK java.lang.ServiceLoader mechanism is then used to load your module's implementation of this interface.
The function of an Extension implementation is to register with the core AS the management API, xml parsers and xml marshallers associated with the extension module's subsystems. An Extension can register multiple subsystems, although the usual practice is to register just one per extension.
Once the Extension is loaded, the core AS will make two invocations upon it:
void initializeParsers(ExtensionParsingContext context)
When this is invoked, it is the Extension implementation's responsibility to initialize the XML parsers for this extension's subsystems and register them with the given ExtensionParsingContext. The parser's job when it is later called is to create org.jboss.dmr.ModelNode objects representing the WildFly management API operations needed to make the AS's running configuration match what is described in the xml. Those management operation ModelNodes are added to a list passed in to the parser.
A parser for each version of the xml schema used by a subsystem should be registered. A well behaved subsystem should be able to parse any version of its schema that it has ever published in a final release.
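As a sketch, if a 1.1 version of the tracker schema were introduced later, initializeParsers() might register a parser per namespace; the 1.1 namespace and the parser10/parser11 fields shown here are hypothetical:

@Override
public void initializeParsers(ExtensionParsingContext context) {
    //Keep parsing configurations written against the original schema...
    context.setSubsystemXmlMapping("urn:com.acme.corp.tracker:1.0", parser10);
    //...as well as the current version
    context.setSubsystemXmlMapping("urn:com.acme.corp.tracker:1.1", parser11);
}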
void initialize(ExtensionContext context)
When this is invoked, it is the Extension implementation's responsibility to register with the core AS the management API for its subsystems, and to register the object that is capable of marshalling the subsystem's in-memory configuration back to XML. Only one XML marshaller is registered per subsystem, even though multiple XML parsers can be registered. The subsystem should always write documents that conform to the latest version of its XML schema.
The registration of a subsystem's management API is done via the ManagementResourceRegistration interface. Before discussing that interface in detail, let's describe how it (and the related Resource interface) relate to the notion of managed resources in the AS.
Each subsystem is responsible for managing one or more management resources. The conceptual characteristics of a management resource are covered in some detail in the Admin Guide; here we'll just summarize the main points. A management resource has
An address consisting of a list of key/value pairs that uniquely identifies a resource
Zero or more attributes, the value of which is some sort of org.jboss.dmr.ModelNode
Zero or more supported operations. An operation has a string name and zero or more parameters, each of which is a key/value pair where the key is a string naming the parameter and the value is some sort of ModelNode
Zero or more children, each of which in turn is a managed resource
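For example, the type child from the tracker subsystem in the first section of this guide is a management resource: its address is /subsystem=tracker/type=war, it has a single attribute (tick), it supports add and remove operations, and it has no children.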
The implementation of a managed resource is somewhat analogous to the implementation of a Java object. A managed resource will have a "type", which encapsulates API information about that resource and logic used to implement that API. And then there are actual instances of the resource, which primarily store data representing the current state of a particular resource. This is somewhat analogous to the "class" and "object" notions in Java.
A managed resource's type is encapsulated by the org.jboss.as.controller.registry.ManagementResourceRegistration the core AS creates when the type is registered. The data for a particular instance is encapsulated in an implementation of the org.jboss.as.controller.registry.Resource interface.
TODO
TODO
Most commonly used implementation: SimpleResourceDefinition
TODO
Most commonly used implementation: StandardResourceDescriptionResolver
TODO
Most commonly used implementation: SimpleAttributeDefinition. Use SimpleAttributeDefinitionBuilder to build.
TODO
TODO
TODO
TODO
TODO
There are several guides in the WildFly 9 documentation series. This list gives an overview of each of the guides:
* Getting Started Guide - Explains how to download and start WildFly 9.
* Getting Started Developing Applications Guide - Talks you through developing your first applications on WildFly 9, and introduces you to JBoss Tools and how to deploy your applications.
* JavaEE 6 Tutorial - A Java EE 6 tutorial.
* Admin Guide - Tells you how to configure and manage your WildFly 9 instances.
* Developer Guide - Contains concepts that you need to be aware of when developing applications for WildFly 9. Classloading is explained in depth.
* High Availability Guide - Reference guide for how to set up clustered WildFly 9 instances.
* Extending WildFly - A guide to adding new functionality to WildFly 9.